Supervised Information-Theoretic Competitive Learning by Cost-Sensitive Information Maximization
Abstract
In this paper, we propose a new supervised learning method in which information in an intermediate layer is controlled by an associated cost, while errors between targets and outputs are minimized in an output layer. In the intermediate layer, competition is realized by maximizing mutual information between input patterns and competitive units with Gaussian functions. The process of information maximization is controlled by changing a cost associated with the information. Thus, we can flexibly control the process of information maximization and obtain internal representations appropriate to given problems. The new method can be considered a hybrid model similar to the counter-propagation model, in which a competitive layer is combined with an output layer. It can also be seen as a new approach to radial-basis function networks, in which the centers of classes are determined by information maximization. We applied our method to an artificial data problem and to the prediction of long-term interest rates and yen exchange rates. In all cases, experimental results showed that the cost can flexibly change internal representations, and that the cost-sensitive method gave better performance than the conventional methods.
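As a rough sketch of the information-maximization step described in the abstract, the mutual information between input patterns and competitive units can be computed from Gaussian unit activations as I = H(J) - H(J|X). The function names, the `sigma` parameter, and the exact normalization below are our assumptions for illustration; the paper's precise cost-sensitive formulation may differ.

```python
import numpy as np

def gaussian_activations(X, centers, sigma=1.0):
    """Gaussian output g_j(x) = exp(-||x - w_j||^2 / (2 sigma^2))
    for every (pattern, competitive unit) pair."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mutual_information(X, centers, sigma=1.0, eps=1e-12):
    """Mutual information between input patterns and competitive units."""
    g = gaussian_activations(X, centers, sigma)
    p_j_given_x = g / g.sum(axis=1, keepdims=True)   # p(j | x_s), normalized over units
    p_j = p_j_given_x.mean(axis=0)                   # p(j), averaged over patterns
    entropy = -(p_j * np.log(p_j + eps)).sum()       # H(J)
    cond_entropy = -(p_j_given_x * np.log(p_j_given_x + eps)).sum(axis=1).mean()  # H(J|X)
    return entropy - cond_entropy                    # I(X; J) = H(J) - H(J|X)
```

When patterns cluster tightly around distinct unit centers, each unit responds almost exclusively to its own cluster, so the conditional entropy approaches zero and the mutual information approaches its maximum H(J); coincident centers drive it to zero. Maximizing this quantity with respect to the centers is what drives competition among the units.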
Similar resources
Forced Information for Information-Theoretic Competitive Learning
We have proposed a new information-theoretic approach to competitive learning [1], [2], [3], [4], [5]. The information-theoretic method is a very flexible type of competitive learning, compared with conventional competitive learning. However, some problems have been pointed out concerning the information-theoretic method, for example, slow convergence. In this paper, we propose a new computatio...
Information-Theoretic Active SOM for Improving Generalization Performance
In this paper, we introduce a new type of information-theoretic method called "information-theoretic active SOM", based on the self-organizing map (SOM), for training multi-layered neural networks. The SOM is one of the most important techniques in unsupervised learning. However, SOM knowledge is sometimes ambiguous and cannot be easily interpreted. Thus, we introduce the information-theoretic ...
Discriminative Clustering by Regularized Information Maximization
Is there a principled way to learn a probabilistic discriminative classifier from an unlabeled data set? We present a framework that simultaneously clusters the data and trains a discriminative classifier. We call it Regularized Information Maximization (RIM). RIM optimizes an intuitive information-theoretic objective function which balances class separation, class balance and classifier comple...
Squared-loss Mutual Information Regularization: A Novel Information-theoretic Approach to Semi-supervised Learning
We propose squared-loss mutual information regularization (SMIR) for multi-class probabilistic classification, following the information maximization principle. SMIR is convex under mild conditions and thus avoids the nonconvexity of mutual information regularization. It offers all of the following four abilities to semi-supervised algorithms: analytical solution, out-of-sample/multi-class cl...
Linguistic Rule Acquisition by Information Maximization: Neural Networks Infer the Use of Donatory Verbs
In this paper, we propose a new information-theoretic method for a linguistic rule acquisition problem, and demonstrate that the linguistic rule acquisition process is an instance of information maximization. The new method is based upon unsupervised competitive learning. Unsupervised learning is needed because children acquire rules without any explicit instruction. In the exp...